74 research outputs found

    MCMC inference for Markov Jump Processes via the Linear Noise Approximation

    Bayesian analysis for Markov jump processes is a non-trivial and challenging problem. Although exact inference is theoretically possible, it is computationally demanding, and thus its applicability is limited to a small class of problems. In this paper we describe the application of Riemann manifold MCMC methods using an approximation to the likelihood of the Markov jump process which is valid when the system modelled is near its thermodynamic limit. The proposed approach is both statistically and computationally efficient, while the convergence rate and mixing of the chains allow for fast MCMC inference. The methodology is evaluated using numerical simulations on two problems from chemical kinetics and one from systems biology.
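
    As a rough illustration of the approach described above, the sketch below fits a Markov jump process using a Gaussian likelihood built from Linear Noise Approximation (LNA) moments. It is a minimal sketch, not the paper's implementation: the immigration-death model, the marginal (rather than joint) Gaussian likelihood and the random-walk Metropolis sampler are all simplifying assumptions, whereas the paper uses Riemann manifold MCMC.

```python
# Minimal sketch: MCMC for a Markov jump process with an LNA-based likelihood.
# Assumed toy model: immigration-death process with immigration rate theta[0]
# and per-capita death rate theta[1]; not the paper's examples or code.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def lna_moments(theta, t_obs, x0):
    """Solve the LNA mean and variance ODEs for the immigration-death process."""
    k_in, k_out = theta
    def odes(t, y):
        phi, v = y
        dphi = k_in - k_out * phi                    # macroscopic rate equation
        dv = -2.0 * k_out * v + k_in + k_out * phi   # LNA variance equation
        return [dphi, dv]
    sol = solve_ivp(odes, (0.0, t_obs[-1]), [x0, 0.0], t_eval=t_obs)
    return sol.y[0], sol.y[1]

def log_lik(theta, t_obs, y_obs, x0):
    """Crude Gaussian likelihood using the marginal LNA moments at each time."""
    mean, var = lna_moments(theta, t_obs, x0)
    var = np.maximum(var, 1e-9)
    return np.sum(-0.5 * np.log(2 * np.pi * var) - 0.5 * (y_obs - mean) ** 2 / var)

def metropolis(t_obs, y_obs, x0, n_iter=5000, step=0.05):
    """Random-walk Metropolis on log(theta); the paper uses manifold MCMC instead."""
    log_theta = np.log(np.array([5.0, 0.5]))
    ll = log_lik(np.exp(log_theta), t_obs, y_obs, x0)
    samples = []
    for _ in range(n_iter):
        prop = log_theta + step * rng.standard_normal(2)
        ll_prop = log_lik(np.exp(prop), t_obs, y_obs, x0)
        if np.log(rng.uniform()) < ll_prop - ll:
            log_theta, ll = prop, ll_prop
        samples.append(np.exp(log_theta))
    return np.array(samples)

# Synthetic observations drawn from the LNA itself, purely to exercise the sampler.
t_obs = np.linspace(0.5, 10.0, 20)
true_mean, true_var = lna_moments((5.0, 0.5), t_obs, x0=2.0)
y_obs = true_mean + np.sqrt(true_var) * rng.standard_normal(len(t_obs))
print(metropolis(t_obs, y_obs, x0=2.0).mean(axis=0))
```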

    Probabilistic Numerics and Uncertainty in Computations

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data has led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numerical algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimisers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.
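
    As a concrete, hedged illustration of one probabilistic numerical method, the sketch below implements a basic form of Bayesian quadrature: integration is posed as Gaussian-process inference, so the algorithm returns an uncertainty alongside its estimate of the integral. The kernel, length-scale and toy integrand are illustrative choices and are not taken from the paper.

```python
# Minimal sketch of Bayesian quadrature: a GP prior on the integrand induces a
# Gaussian posterior over the value of the integral. Illustrative choices only.
import numpy as np

def rbf(x, y, ell=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell ** 2)

def bayesian_quadrature(f, a=0.0, b=1.0, n_design=8, n_grid=2000, noise=1e-8):
    """Posterior mean and variance of the integral of f over [a, b] under a GP prior."""
    x = np.linspace(a, b, n_design)          # design points where f is evaluated
    y = f(x)
    grid = np.linspace(a, b, n_grid)         # fine grid to approximate kernel integrals
    w = np.full(n_grid, (b - a) / n_grid)    # simple quadrature weights on the grid
    K = rbf(x, x) + noise * np.eye(n_design)
    z = rbf(grid, x).T @ w                   # z_i approximates the integral of k(., x_i)
    zz = w @ rbf(grid, grid) @ w             # approximates the double kernel integral
    alpha = np.linalg.solve(K, y)
    mean = z @ alpha                          # posterior mean of the integral
    var = zz - z @ np.linalg.solve(K, z)      # posterior variance of the integral
    return mean, max(var, 0.0)

f = lambda x: np.sin(3 * x) + 0.5 * x        # toy integrand
m, v = bayesian_quadrature(f)
print(f"integral ≈ {m:.4f} ± {np.sqrt(v):.4f}")
```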

    Inferring networks from time series: a neural approach

    Network structures underlie the dynamics of many complex phenomena, from gene regulation and foodwebs to power grids and social media. Yet, as they often cannot be observed directly, their connectivities must be inferred from observations of their emergent dynamics. In this work we present a powerful and fast computational method to infer large network adjacency matrices from time series data using a neural network. Using a neural network provides uncertainty quantification on the prediction in a manner that reflects both the non-convexity of the inference problem and the noise on the data. This is useful since network inference problems are typically underdetermined, and it is a feature that has hitherto been lacking from network inference methods. We demonstrate our method's capabilities by inferring line failure locations in the British power grid from observations of its response to a power cut. Since the problem is underdetermined, many classical statistical tools (e.g. regression) will not be straightforwardly applicable. Our method, in contrast, provides probability densities on each edge, allowing the use of hypothesis testing to make meaningful probabilistic statements about the location of the power cut. We also demonstrate our method's ability to learn an entire cost matrix for a non-linear model from a dataset of economic activity in Greater London. Our method outperforms OLS regression on noisy data in terms of both speed and prediction accuracy, and scales as N^2 where OLS is cubic. Since our technique is not specifically engineered for network inference, it represents a general parameter estimation scheme that is applicable to any parameter dimension.
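
    The sketch below is a loose, simplified stand-in for the kind of approach described above and is not the authors' code: a small neural network produces an adjacency matrix for an assumed non-linear dynamical model, is trained so that one-step-ahead predictions match the observed time series, and repeated fits from random initialisations provide a crude per-edge uncertainty estimate in place of the paper's uncertainty quantification.

```python
# Minimal sketch: infer a network adjacency matrix from time series with a
# neural network. The dynamics x_{t+1} = x_t + dt * A @ tanh(x_t), the network
# architecture and the restart-ensemble uncertainty are illustrative assumptions.
import torch

torch.manual_seed(0)
N, T, dt = 5, 200, 0.05

# Synthetic ground-truth network, used only to generate data for the sketch.
A_true = (torch.rand(N, N) < 0.3).float() * torch.randn(N, N) * 0.5
xs = [torch.randn(N)]
for _ in range(T - 1):
    xs.append(xs[-1] + dt * A_true @ torch.tanh(xs[-1]) + 0.01 * torch.randn(N))
data = torch.stack(xs)                               # (T, N) observed time series

def fit_once(n_steps=2000):
    """Fit one adjacency matrix by matching one-step-ahead predictions."""
    net = torch.nn.Sequential(                       # latent vector -> flattened A
        torch.nn.Linear(16, 64), torch.nn.Tanh(), torch.nn.Linear(64, N * N))
    z = torch.randn(16)
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(n_steps):
        A = net(z).reshape(N, N)
        pred = data[:-1] + dt * torch.tanh(data[:-1]) @ A.T
        loss = torch.mean((pred - data[1:]) ** 2)
        opt.zero_grad(); loss.backward(); opt.step()
    return net(z).reshape(N, N).detach()

samples = torch.stack([fit_once() for _ in range(5)])   # crude per-edge "posterior"
print("edge means:\n", samples.mean(0))
print("edge stds:\n", samples.std(0))
```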

    The silicon trypanosome

    African trypanosomes have emerged as promising unicellular model organisms for the next generation of systems biology. They offer unique advantages, due to their relative simplicity, the availability of all standard genomics techniques and a long history of quantitative research. Reproducible cultivation methods exist for morphologically and physiologically distinct life-cycle stages. The genome has been sequenced, and microarrays, RNA-interference and high-accuracy metabolomics are available. Furthermore, the availability of extensive kinetic data on all glycolytic enzymes has led to the early development of a complete, experiment-based dynamic model of an important biochemical pathway. Here we describe the achievements of trypanosome systems biology so far and outline the necessary steps towards the ambitious aim of creating a 'Silicon Trypanosome', a comprehensive, experiment-based, multi-scale mathematical model of trypanosome physiology. We expect that, in the long run, the quantitative modelling enabled by the Silicon Trypanosome will play a key role in selecting the most suitable targets for developing new anti-parasite drugs.
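
    For readers unfamiliar with the kind of dynamic pathway model referred to above, the sketch below shows the general form of a kinetic ODE model: a toy two-step pathway with Michaelis-Menten rate laws. The species and parameter values are illustrative and are not taken from the trypanosome glycolysis model.

```python
# Minimal sketch of a kinetic ODE pathway model: S -> I -> P with
# Michaelis-Menten kinetics. Parameters are illustrative, not measured values.
import numpy as np
from scipy.integrate import solve_ivp

def pathway(t, y, vmax1=1.0, km1=0.5, vmax2=0.8, km2=0.3):
    s, i, p = y
    v1 = vmax1 * s / (km1 + s)      # enzyme 1: S -> I
    v2 = vmax2 * i / (km2 + i)      # enzyme 2: I -> P
    return [-v1, v1 - v2, v2]

sol = solve_ivp(pathway, (0, 50), [5.0, 0.0, 0.0], t_eval=np.linspace(0, 50, 200))
print("final concentrations [S, I, P]:", sol.y[:, -1])
```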

    Infinite factorization of multiple non-parametric views

    Combined analysis of multiple data sources is of increasing interest in applications, in particular for distinguishing shared and source-specific aspects. We extend this rationale of classical canonical correlation analysis into a flexible, generative and non-parametric clustering setting, by introducing a novel non-parametric hierarchical mixture model. The lower level of the model describes each source with a flexible non-parametric mixture, and the top level combines these to describe commonalities of the sources. The lower-level clusters arise from hierarchical Dirichlet Processes, inducing an infinite-dimensional contingency table between the views. The commonalities between the sources are modeled by an infinite block model of the contingency table, interpretable as non-negative factorization of infinite matrices, or as a prior for infinite contingency tables. With Gaussian mixture components plugged in for continuous measurements, the model is applied to two views of genes, mRNA expression and abundance of the produced proteins, to expose groups of genes that are co-regulated in either or both of the views. Cluster analysis of co-expression is a standard simple way of screening for co-regulation, and the two-view analysis extends the approach to distinguishing between pre- and post-translational regulation.
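
    As background for the non-parametric machinery mentioned above, the sketch below shows a truncated stick-breaking construction of Dirichlet process mixture weights with Gaussian components, the kind of building block used at the lower level of such hierarchical models. The truncation level and parameters are illustrative only and the sketch does not implement the paper's full hierarchical model.

```python
# Minimal sketch: truncated stick-breaking draw from a Dirichlet process
# mixture with Gaussian components. Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(1)

def stick_breaking(alpha, truncation=50):
    """Truncated stick-breaking weights of a Dirichlet process."""
    betas = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

def sample_dp_mixture(n, alpha=2.0):
    """Draw n points from a DP mixture with Gaussian components."""
    w = stick_breaking(alpha)
    means = rng.normal(0.0, 5.0, size=len(w))     # component locations from the base measure
    z = rng.choice(len(w), size=n, p=w / w.sum()) # cluster assignments
    return rng.normal(means[z], 1.0), z

x, z = sample_dp_mixture(500)
print("number of occupied clusters:", len(np.unique(z)))
```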

    Quantification of functionalised gold nanoparticle-targeted knockdown of gene expression in HeLa cells

    Introduction: Gene therapy continues to grow as an important area of research, primarily because of its potential in the treatment of disease. One significant area where there is a need for better understanding is in improving the efficiency of oligonucleotide delivery to the cell and indeed, following delivery, the characterization of the effects on the cell. Methods: In this report, we compare different transfection reagents as delivery vehicles for gold nanoparticles functionalized with DNA oligonucleotides, and quantify their relative transfection efficiencies. The inhibitory properties of small interfering RNA (siRNA), single-stranded RNA (ssRNA) and single-stranded DNA (ssDNA) sequences targeted to human metallothionein hMT-IIa are also quantified in HeLa cells. Techniques used in this study include fluorescence and confocal microscopy, qPCR and Western analysis. Findings: We show that the use of transfection reagents does significantly increase nanoparticle transfection efficiencies. Furthermore, siRNA, ssRNA and ssDNA sequences all have inhibitory properties comparable to those of ssDNA sequences immobilized onto gold nanoparticles. We also show that functionalized gold nanoparticles can co-localize with autophagosomes, and we illustrate other factors that can affect data collection and interpretation when performing studies with functionalized nanoparticles. Conclusions: The desired outcome for biological knockdown studies is the efficient reduction of a specific target, which we demonstrate by using ssDNA inhibitory sequences targeted to human metallothionein IIa gene transcripts that result in the knockdown of both the mRNA transcript and the target protein.
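
    For readers unfamiliar with how qPCR knockdown figures are typically derived, the short worked example below applies the standard 2^(-ΔΔCt) relative-quantification calculation. The Ct values are invented for illustration and are not data from this study, which may have used a different analysis.

```python
# Worked example of relative knockdown quantification by qPCR using the
# standard 2^(-ddCt) method. All Ct values below are made up for illustration.

# Mean Ct values: target gene (e.g. hMT-IIa) and a reference housekeeping gene.
ct_target_treated, ct_ref_treated = 26.5, 18.0   # transfected cells
ct_target_control, ct_ref_control = 23.8, 18.1   # untreated control cells

dct_treated = ct_target_treated - ct_ref_treated     # normalise to the reference gene
dct_control = ct_target_control - ct_ref_control
ddct = dct_treated - dct_control                      # compare treated vs control

fold_change = 2.0 ** (-ddct)     # relative expression (assumes ~100% PCR efficiency)
knockdown = 1.0 - fold_change
print(f"relative expression: {fold_change:.2f}, knockdown: {knockdown:.0%}")
```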